Lung cancer is among the deadliest cancers, and parts of its diagnosis and treatment depend on the accurate delineation of the tumor. Human-centered segmentation, which is currently the most common approach, is subject to inter-observer variability and is also time-consuming, considering that only experts can provide annotations. Automatic and semi-automatic tumor segmentation methods have recently shown promising results. However, as different researchers have validated their algorithms using various datasets and performance metrics, reliably evaluating these methods remains an open challenge. The goal of the Lung-Originated Tumor Segmentation from Computed Tomography Scan (LOTUS) benchmark, created through the 2018 IEEE Video and Image Processing (VIP) Cup competition, is to provide a unique dataset and predefined metrics so that different researchers can develop and evaluate their methods in a unified fashion. The 2018 VIP Cup began with global engagement from 42 countries to access the competition data. At the registration stage, 129 members formed 28 teams from 10 countries, of which 9 teams made it to the final stage and 6 teams successfully completed all the required tasks. In a nutshell, all the algorithms proposed during the competition were based on deep learning models combined with a false-positive reduction technique. The methods developed by the three finalists show promising tumor segmentation results; however, more effort should be put into reducing the false-positive rate. This competition manuscript presents an overview of the VIP Cup challenge, along with the proposed algorithms and results.
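The abstract does not enumerate the benchmark's predefined metrics, but overlap measures such as the Dice similarity coefficient are the standard way to score tumor segmentations against expert annotations. A minimal sketch, with the voxel-index representation chosen purely for illustration:

```python
def dice_coefficient(pred, truth):
    """Dice similarity between two segmentation masks, each given as a
    collection of voxel indices marked as tumor."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # two empty masks agree perfectly
    # 2|A intersect B| / (|A| + |B|): 1.0 is perfect overlap, 0.0 is none
    return 2.0 * len(pred & truth) / (len(pred) + len(truth))
```

A false-positive reduction step shrinks the set of predicted voxels outside the ground truth, which directly raises this score.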
translated by Google Translate
Lung cancer is the leading cause of cancer-related mortality worldwide and has various histologic types, among which lung adenocarcinoma (LUAC) has recently been the most prevalent. Lung adenocarcinomas are classified as pre-invasive, minimally invasive, and invasive adenocarcinomas. Timely and accurate knowledge of the invasiveness of lung nodules leads to a proper treatment plan and reduces the risk of unnecessary or late surgeries. Currently, the primary imaging modality for assessing and predicting the invasiveness of LUACs is chest CT. However, results based on CT images are subjective and suffer from low accuracy compared to the ground truth provided by post-surgical resection review. This paper develops a predictive transformer-based framework, referred to as the "CAE-Transformer", to classify LUACs. The CAE-Transformer utilizes a convolutional autoencoder (CAE) to automatically extract informative features from CT slices, which are then fed to a modified transformer model to capture global inter-slice relations. Experimental results on our in-house dataset of 114 pathologically proven sub-solid nodules (SSNs) demonstrate the superiority of the CAE-Transformer over histogram/radiomics-based models and its deep-learning-based counterparts, achieving an accuracy of 87.73%, a sensitivity of 88.67%, a specificity of 86.33%, and an AUC of 0.913, using 10-fold cross-validation.
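The reported figures are standard binary classification summaries. As a reminder of how they relate to a confusion matrix, a short sketch (the counts in the test are made up for illustration and are not the paper's results):

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts
    (e.g. positives = invasive nodules, negatives = non-invasive)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on positive cases
    specificity = tn / (tn + fp)   # recall on negative cases
    return accuracy, sensitivity, specificity
```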
The objective of this study is to develop a robust deep-learning-based framework to distinguish COVID-19, community-acquired pneumonia (CAP), and normal cases based on chest CT scans acquired in different imaging centers using various protocols and radiation doses. We show that while our proposed model is trained on a relatively small dataset acquired from only one imaging center using a specific scanning protocol, it performs well on heterogeneous test sets obtained from multiple scanners using different technical parameters. We also show that the model can be updated via an unsupervised approach to cope with the data shift between the train and test sets, and to enhance its robustness upon receiving a new external dataset from a different center. We adopted an ensemble architecture to aggregate the predictions from multiple versions of the model. For initial training and development purposes, an in-house dataset of 171 COVID-19, 60 CAP, and 76 normal cases was used, containing volumetric CT scans acquired from one imaging center using a constant standard radiation dose scanning protocol. To evaluate the model, we retrospectively collected four different test sets to investigate the effects of shifts in data characteristics on the model's performance. Among the test cases were CT scans with characteristics similar to the train set, as well as noisy low-dose and ultra-low-dose CT scans. In addition, some test CT scans were obtained from patients with a history of cardiovascular diseases or surgeries. The entire test dataset used in this study contained 51 COVID-19, 28 CAP, and 51 normal cases. Experimental results indicate that our proposed framework performs well on all test sets, achieving a total accuracy of 96.15% (95% CI: [91.25-98.74]), a COVID-19 sensitivity of 96.08% (95% CI: [86.54-99.5]), and a CAP sensitivity of 92.86% (95% CI: [76.50-99.19]).
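The abstract does not spell out the ensemble's aggregation rule. A common and simple choice, shown here purely as an assumption, is to average the class-probability vectors produced by the model versions and take the argmax:

```python
CLASSES = ["COVID-19", "CAP", "normal"]

def ensemble_predict(prob_list):
    """Average class-probability vectors from several model versions and
    return the winning class label plus the averaged probabilities."""
    n = len(prob_list)
    avg = [sum(p[i] for p in prob_list) / n for i in range(len(CLASSES))]
    return CLASSES[avg.index(max(avg))], avg
```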
Reverse transcription-polymerase chain reaction (RT-PCR) is currently the gold standard in COVID-19 diagnosis. It can, however, take days to provide a diagnosis, and false-negative rates are relatively high. Imaging, in particular chest computed tomography (CT), can assist with the diagnosis and assessment of this disease. However, it has been shown that standard-dose CT scans impose a significant radiation burden on patients, especially those requiring multiple scans. In this study, we consider low-dose and ultra-low-dose (LDCT and ULDCT) scan protocols that reduce the radiation exposure to close to that of a single X-ray, while maintaining an acceptable resolution for diagnostic purposes. Since thoracic radiology expertise may not be widely available during the pandemic, we develop an artificial intelligence (AI)-based framework using a collected dataset of LDCT/ULDCT scans to study the hypothesis that the AI model can provide human-level performance. The AI model uses a two-stage capsule network architecture and can rapidly classify COVID-19, community-acquired pneumonia (CAP), and normal cases using LDCT/ULDCT scans. The AI model achieves a COVID-19 sensitivity of 89.5% +/- 0.11, a CAP sensitivity of 95% +/- 0.11, a normal-case sensitivity (specificity) of 85.7% +/- 0.16, and an accuracy of 90% +/- 0.06. By incorporating clinical data (demographics and symptoms), the performance is further improved to a COVID-19 sensitivity of 94.3% +/- 0.05, a CAP sensitivity of 96.7% +/- 0.07, a normal-case sensitivity (specificity) of 91% +/- 0.09, and an accuracy of 94.1% +/- 0.03. The proposed AI model achieves human-level diagnosis based on LDCT/ULDCT scans with reduced radiation exposure. We believe that the proposed AI model has the potential to assist radiologists in accurately and promptly diagnosing COVID-19 infection, and to help control the transmission chain during the pandemic.
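How the clinical data enter the model is not described in this abstract. A simple late-fusion baseline, shown only as an illustrative assumption (a convex combination of the imaging-based and clinical-data-based probability vectors, with a hypothetical weight `w`), conveys the general idea:

```python
def fuse_predictions(ct_probs, clinical_probs, w=0.5):
    """Late fusion of CT-based and clinical-data-based class probabilities.
    w weights the imaging model; 1 - w weights the clinical model."""
    return [w * c + (1.0 - w) * k for c, k in zip(ct_probs, clinical_probs)]
```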
As modern data pipelines continue to collect, produce, and store a variety of data formats, extracting and combining value from traditional and context-rich sources such as strings, text, video, audio, and logs becomes a manual process, since such formats are unsuitable for an RDBMS. To tap into this dark data, domain experts analyze and extract insights and integrate them into the data repositories. This process can involve out-of-DBMS, ad-hoc analysis and processing, resulting in ETL overhead, engineering effort, and suboptimal performance. While AI systems based on ML models can automate the analysis process, they often generate further context-rich answers. Using multiple sources of truth, either for training the models or in the form of knowledge bases, further exacerbates the problem of consolidating the data of interest. We envision an analytical engine co-optimized with components that enable context-rich analysis. Firstly, as the data from different sources, or resulting from model answering, cannot be cleaned ahead of time, we propose online data integration via model-assisted similarity operations. Secondly, we aim for holistic, cost- and rule-based pipeline optimization across relational and model-based operators. Thirdly, with increasingly heterogeneous hardware and equally heterogeneous workloads, ranging from traditional relational analytics to generative model inference, we envision a system that adapts just in time to complex analytical query requirements. To solve increasingly complex analytical problems, ML offers attractive solutions that must be combined with traditional analytical processing, and that can benefit from decades of database-community research, to achieve scalability and performance that are effortless for the end user.
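The model-assisted similarity operations can be pictured as a join whose equality predicate is replaced by closeness in an embedding space. In the sketch below, `embed` stands in for a learned model and the threshold is an arbitrary assumption:

```python
def cosine(u, v):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sum(a * a for a in u) ** 0.5
    norm_v = sum(b * b for b in v) ** 0.5
    return dot / (norm_u * norm_v)

def similarity_join(left, right, embed, threshold=0.9):
    """Join keys that are close in embedding space rather than exactly equal,
    deferring data cleaning to query time (online data integration)."""
    return [(l, r) for l in left for r in right
            if cosine(embed(l), embed(r)) >= threshold]
```

A rule-based optimizer could, for instance, push cheap relational filters below this expensive model-assisted operator.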
Text-based personality computing (TPC) has gained much research interest in NLP. In this paper, we describe 15 challenges that we consider deserving of the research community's attention. These challenges are organized by topic: personality taxonomies, measurement quality, datasets, performance evaluation, modelling choices, as well as ethics and fairness. In addressing each challenge, we not only combine perspectives from both NLP and the social sciences, but also offer concrete suggestions towards more valid and reliable TPC research.
Stance detection (SD) can be considered a special case of textual entailment recognition (TER), a generic natural language task. Modelling SD as TER may offer benefits like more training data and a more general learning scheme. In this paper, we present an initial empirical analysis of this approach. We apply it to a difficult but relevant test case where no existing labelled SD dataset is available, because this is where modelling SD as TER may be especially helpful. We also leverage measurement knowledge from social sciences to improve model performance. We discuss our findings and suggest future research directions.
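Modelling SD as TER amounts to recasting each stance example as a premise-hypothesis pair and mapping the entailment verdicts back to stance labels. The template and label mapping below are illustrative assumptions, not the paper's exact scheme:

```python
def stance_to_entailment(text, target):
    """Recast a stance example as an entailment pair: the text is the premise,
    and the hypothesis asserts a favourable stance towards the target."""
    premise = text
    hypothesis = f"The author is in favour of {target}."
    return premise, hypothesis

# Map a TER model's verdict back to stance labels (illustrative mapping).
LABEL_MAP = {"entailment": "favor", "contradiction": "against", "neutral": "none"}
```

Any TER model trained on generic entailment data can then score stance pairs without a labelled SD dataset, which is the benefit the abstract points to.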
Entrainment is the phenomenon by which an interlocutor adapts their speaking style to align with their partner in conversation. It has been found in different dimensions, such as the acoustic, prosodic, lexical, or syntactic. In this work, we explore and utilize the entrainment phenomenon to improve spoken dialogue systems for voice assistants. We first examine the existence of entrainment in human-to-human dialogues with respect to acoustic features, and then extend the analysis to emotion features. The results show strong evidence of entrainment in terms of both acoustic and emotion features. Based on these findings, we implement two entrainment policies and assess whether integrating the entrainment principle into a Text-to-Speech (TTS) system improves the synthesis performance and the user experience. We find that integrating the entrainment principle into a TTS system brings performance improvements when considering acoustic features, while no obvious improvement is observed when considering emotion features.
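One simple proxy for acoustic entrainment, shown here only as an illustration (the paper's exact measures and policies are not given in this abstract), is the mean absolute difference of a feature across paired adjacent turns; lower values relative to a baseline of non-adjacent turns suggest stronger alignment:

```python
def turn_proximity(a_turns, b_turns):
    """Mean absolute difference of one acoustic feature (e.g. pitch) between
    paired adjacent turns of speakers A and B; lower = closer styles."""
    diffs = [abs(a - b) for a, b in zip(a_turns, b_turns)]
    return sum(diffs) / len(diffs)
```

An entrainment policy for a TTS system would then nudge the synthesized feature values toward the user's, shrinking this distance over the course of a dialogue.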
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Runtime verification (RV) has the potential to enable the safe operation of safety-critical systems that are too complex to formally verify, such as Robot Operating System 2 (ROS2) applications. Writing correct monitors can itself be complex, and errors in the monitoring subsystem threaten the mission as a whole. This paper gives an overview of a formal approach to generating runtime monitors for autonomous robots from requirements written in a structured natural language. Our approach integrates the Formal Requirements Elicitation Tool (FRET) with Copilot, a runtime verification framework, through the Ogma integration tool. FRET is used to specify requirements with unambiguous semantics, which are then automatically translated into temporal logic formulae. Ogma generates monitor specifications from the FRET output, which are compiled into hard-real-time C99. To facilitate the integration of the monitors in ROS2, we have extended Ogma to generate ROS2 packages defining monitoring nodes, which run the monitors when new data is available and publish the results of any violations. The goal of our approach is to treat the generated ROS2 packages as black boxes and integrate them into larger ROS2 systems with minimal effort.
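In the described toolchain the monitor code is generated automatically (FRET requirements become temporal logic, Ogma turns them into Copilot specifications compiled to hard-real-time C99). As a language-independent illustration of what such a monitoring node does, here is a hand-written sketch of a monitor for a simple invariant; the property and the data handling are invented for the example:

```python
class InvariantMonitor:
    """Minimal runtime monitor for a property of the form
    'always speed <= limit'. A generated ROS2 node would call step()
    whenever new data arrives and publish any violations."""
    def __init__(self, limit):
        self.limit = limit
        self.violations = []

    def step(self, timestamp, speed):
        ok = speed <= self.limit
        if not ok:
            # in ROS2, this result would be published on a violations topic
            self.violations.append((timestamp, speed))
        return ok
```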